Optimal Control for Industrial Sucrose Crystallization with Action Dependent Heuristic Dynamic Programming

Authors

Abstract


Similar articles

Action-Dependent Heuristic Dynamic Programming for Neurooptimization

We propose a new approach to solving optimization problems based on an adaptive critic design called Action-Dependent Heuristic Dynamic Programming (ADHDP) [1]. We consider the well-known Traveling Salesman Problem (TSP) as a helpful benchmark to highlight the main principles of our approach. The TSP formulation and many methods to solve it, including the classical sequential nearest city algorithm, and ...

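As a point of reference for the baseline named above, the "classical sequential nearest city algorithm" is the greedy nearest-neighbour construction heuristic for the TSP: starting from an arbitrary city, it always visits the closest unvisited city next. A minimal sketch follows; the random coordinates and the starting city are illustrative assumptions, and this is the comparison baseline, not the ADHDP method itself.

import numpy as np

def nearest_city_tour(coords, start=0):
    # Greedy nearest-city heuristic: from the current city, always move to
    # the closest city that has not been visited yet.
    unvisited = set(range(len(coords))) - {start}
    tour = [start]
    while unvisited:
        here = coords[tour[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(coords[j] - here))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Usage with made-up city coordinates.
cities = np.random.default_rng(1).random((8, 2))
print(nearest_city_tour(cities))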

Application of Action Dependent Heuristic Dynamic Programming to Control an Industrial Waste Incineration Plant

In this paper, we describe our application of a neurocontroller based on Action Dependent Heuristic Dynamic Programming (ADHDP) to optimize the combustion process of an industrial hazardous waste incineration plant. This ADHDP controller was originally designed for online learning. That implies that the controller starts with a randomly initialized policy and improves its performance while i...

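To make the online-learning idea concrete, the sketch below interleaves an action-dependent critic update with an actor (feedback-gain) update on a toy scalar linear plant. Everything here is an assumption made for illustration: the plant model, the quadratic features, the learning rates, and the exploration noise have nothing to do with the incineration process; the point is only the structure of starting from an untrained policy and improving it while controlling the system.

import numpy as np

rng = np.random.default_rng(0)
gamma, lr_critic, lr_actor = 0.9, 0.05, 0.01   # assumed hyperparameters

def plant_step(x, u):
    # Toy scalar linear plant standing in for the real process.
    x_next = 0.9 * x + 0.1 * u
    cost = x_next ** 2 + 0.01 * u ** 2
    return x_next, cost

def features(x, u):
    # Quadratic state-action features, so Q(x, u) = w @ features(x, u).
    return np.array([x * x, x * u, u * u])

w = np.zeros(3)    # critic weights
k = 0.0            # actor: u = k * x, an initially untrained feedback gain

x, u = 1.0, 0.0
for t in range(5000):
    x_next, cost = plant_step(x, u)
    u_next = k * x_next + 0.05 * rng.standard_normal()   # exploration noise
    # Critic: temporal-difference step toward cost + gamma * Q(x', u').
    td = cost + gamma * w @ features(x_next, u_next) - w @ features(x, u)
    w += lr_critic * td * features(x, u)
    # Actor: descend the critic's action gradient dQ/du = w[1]*x + 2*w[2]*u.
    k -= lr_actor * (w[1] * x + 2.0 * w[2] * u) * x
    x, u = x_next, u_next
    if t % 200 == 199:              # occasional reset so the decaying state
        x = rng.uniform(-1.0, 1.0)  # keeps producing a learning signal
        u = k * x

print("learned feedback gain:", k)

Because the critic takes the action as an input, the actor can be improved by descending the critic's estimate of dQ/du without a model of the plant; with these toy settings the learned gain is only roughly optimal, which is enough to show the loop.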

Action dependent heuristic dynamic programming for home energy resource scheduling

Energy management in a smart home environment is nowadays a crucial aspect on which technologies have been focusing in order to save costs and minimize energy waste. This goal can be reached by means of an energy resource scheduling strategy provided by a suitable optimization technique. The proposed solution involves a class of Adaptive Critic Designs (ACDs) called Action Dependent Heuristic ...


Heuristic Dynamic Programming Nonlinear Optimal Controller

This chapter is concerned with the application of approximate dynamic programming (ADP) techniques to solve for the value function, and hence the optimal control policy, in discrete-time nonlinear optimal control problems having continuous state and action spaces. ADP is a reinforcement learning approach (Sutton & Barto, 1998) based on adaptive critics (Barto et al., 1983), (Widrow et al., 1973...

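For orientation, the recursion that HDP-style ADP approximates can be written generically (the notation below is assumed, not quoted from the chapter): with dynamics x_{k+1} = f(x_k, u_k), stage cost r, and discount factor γ,

V_{i+1}(x_k) = \min_{u_k} \bigl[\, r(x_k, u_k) + \gamma\, V_i\bigl(f(x_k, u_k)\bigr) \bigr],
\qquad
h_{i+1}(x_k) = \arg\min_{u_k} \bigl[\, r(x_k, u_k) + \gamma\, V_i\bigl(f(x_k, u_k)\bigr) \bigr].

In the action-dependent variant (ADHDP) the critic instead approximates Q(x_k, u_k) = r(x_k, u_k) + \gamma \min_{u} Q(x_{k+1}, u) directly, so policy improvement needs no explicit model of the dynamics f.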

Stochastic Dynamic Programming with Markov Chains for Optimal Sustainable Control of the Forest Sector with Continuous Cover Forestry

We present a stochastic dynamic programming approach with Markov chains for optimal control of the forest sector. The forest is managed via continuous cover forestry and the complete system is sustainable. Forest industry production, logistic solutions and harvest levels are optimized based on the sequentially revealed states of the markets. Adaptive full system optimization is necessary for co...

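The computational core of stochastic dynamic programming over a Markov chain is a Bellman backup through the transition probabilities. Below is a tiny value-iteration sketch on a made-up two-state, two-action chain; the transition matrices, rewards, and discount factor are placeholders and not the forest-sector model, which works with sequentially revealed market states.

import numpy as np

# Made-up two-state, two-action Markov chain (not the forest-sector model).
# P[a, s, s'] = probability of moving from state s to s' under action a.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # action 0, e.g. "harvest little"
    [[0.6, 0.4], [0.5, 0.5]],   # action 1, e.g. "harvest much"
])
R = np.array([[1.0, 0.5],       # R[a, s] = expected immediate reward
              [2.0, 0.2]])
gamma = 0.95                    # discount factor (assumed)

V = np.zeros(2)
for _ in range(500):            # value iteration: repeated Bellman backups
    Q = R + gamma * (P @ V)     # Q[a, s] = R[a, s] + gamma * E[V(next state)]
    V = Q.max(axis=0)           # keep the best action's value in each state
policy = Q.argmax(axis=0)       # greedy policy w.r.t. the converged values
print("values:", V, "policy:", policy)

Each sweep computes the expected discounted value of every action in every state and keeps the best, converging to the optimal stationary policy for the assumed chain.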


Journal

Journal title: International Journal of Image, Graphics and Signal Processing

Year: 2009

ISSN: 2074-9074, 2074-9082

DOI: 10.5815/ijigsp.2009.01.05